An optimal randomized incremental gradient method

Authors: Guanghui Lan, Yi Zhou
Abstract


In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of m (≥ 1) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a pr...
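To make the problem class concrete, the sketch below is a toy illustration only, not the paper's PDG method or its randomized variant: it builds a finite-sum objective with m smooth quadratic components plus a simple regularization term and applies a plain randomized incremental gradient update that samples one component per iteration. The data, step size, and iteration count are assumptions chosen for illustration.

    import numpy as np

    # Minimal sketch (not the paper's PDG/RPDG method): a plain randomized
    # incremental gradient loop on a finite-sum objective
    #     f(x) = (1/m) * sum_i f_i(x) + h(x),
    # where each f_i is a smooth least-squares component and h is a simple
    # quadratic regularizer.  All data, step sizes, and iteration counts
    # below are illustrative assumptions.

    rng = np.random.default_rng(0)
    m, d = 50, 10                      # number of components, dimension
    A = rng.standard_normal((m, d))    # one row a_i per component
    b = rng.standard_normal(m)
    lam = 0.1                          # weight of the simple term h(x) = lam/2 * ||x||^2

    def grad_component(i, x):
        """Gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
        a = A[i]
        return (a @ x - b[i]) * a

    x = np.zeros(d)
    step = 0.01                        # constant step size (assumption)
    for k in range(5000):
        i = rng.integers(m)            # sample one component uniformly at random
        g = grad_component(i, x) + lam * x   # unbiased estimate of the gradient of f
        x -= step * g

    full_obj = 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * lam * x @ x
    print("objective after randomized incremental steps:", full_obj)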

Related articles

An Effective Gradient Projection Method for Stochastic Optimal Control

In this work, we propose a simple yet effective gradient projection algorithm for a class of stochastic optimal control problems. The basic iteration block is to compute the gradient projection of the objective functional by solving the state and co-state equations via Euler methods and by using Monte Carlo simulations. Convergence properties are discussed and extensive numerical tests are...
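A rough picture of such an iteration block, under heavy simplifications: the toy problem below uses linear dynamics simulated with Euler-Maruyama, a Monte Carlo estimate of the (here trivial) co-state expectation, and a projected gradient step onto box constraints on the control. None of the constants or modelling choices come from the cited paper.

    import numpy as np

    # Minimal sketch of a projected-gradient loop for a toy stochastic
    # control problem (illustrative only; not the cited paper's algorithm).
    # Dynamics: dX = u_t dt + sigma dW, cost
    #     J(u) = E[0.5*(X_T - target)^2] + 0.5*alpha*sum_t u_t^2 * dt,
    # with box constraints u_t in [-1, 1].  The state is simulated with
    # Euler-Maruyama; for these linear dynamics the co-state along each path
    # is simply p_t = X_T - target, and its expectation is estimated by
    # Monte Carlo.  All constants are assumptions.

    rng = np.random.default_rng(1)
    n_steps, dt = 50, 1.0 / 50
    sigma, alpha, target, x0 = 0.3, 0.1, 1.0, 0.0
    n_paths, step, n_iters = 2000, 0.5, 100

    u = np.zeros(n_steps)                      # discretized control
    for it in range(n_iters):
        # forward Euler-Maruyama simulation of the state equation
        dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
        X = np.full(n_paths, x0)
        for t in range(n_steps):
            X = X + u[t] * dt + sigma * dW[:, t]
        # co-state (constant in time along each path) and Monte Carlo gradient
        p_mean = np.mean(X - target)
        grad = p_mean + alpha * u              # dJ/du_t per unit time
        # projected gradient step onto the box [-1, 1]
        u = np.clip(u - step * grad, -1.0, 1.0)

    cost = 0.5 * np.mean((X - target) ** 2) + 0.5 * alpha * np.sum(u ** 2) * dt
    print("approximate cost after projected-gradient iterations:", cost)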


The Gradient Projection Method for Solving an Optimal Control Problem

A gradient method for solving an optimal control problem described by a parabolic equation is considered. The gradient projection method is applied to solve the problem. The convergence of the projection algorithm is investigated.


Finding Optimal Plans for Incremental Method Engineering

Incremental method engineering proposes to evolve the information systems development methods of a software company through a step-wise improvement process. In practice, this approach proved to be effective for reducing the risks of failure while introducing method changes. However, little attention has been paid to the important problem of identifying an adequate plan for implementing the chan...


A Convergent Incremental Gradient Method with a Constant Step Size

An incremental gradient method for minimizing a sum of continuously differentiable functions is presented. The method requires a single gradient evaluation per iteration and uses a constant step size. For the case where the gradient is bounded and Lipschitz continuous, we show that the method visits regions in which the gradient is small infinitely often. Under certain unimodality assu...
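For orientation, here is a minimal cyclic incremental gradient loop with a constant step size on a toy sum of quadratics, recording the full-gradient norm after each pass to loosely echo the "gradient becomes small" behaviour described above. The components, step size, and iteration counts are illustrative assumptions, not the setting analysed in the cited paper.

    import numpy as np

    # Minimal sketch of a cyclic incremental gradient method with a constant
    # step size: one component gradient evaluation per iteration.  The
    # quadratic components and the step size are illustrative assumptions.

    rng = np.random.default_rng(2)
    m, d = 20, 5
    A = rng.standard_normal((m, d))
    b = rng.standard_normal(m)

    def grad_component(i, x):
        """Gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
        a = A[i]
        return (a @ x - b[i]) * a

    x = np.zeros(d)
    step = 0.005                               # constant step size (assumption)
    smallest_grad_norm = np.inf
    for epoch in range(500):
        for i in range(m):                     # one component gradient per iteration
            x -= step * grad_component(i, x)
        full_grad = A.T @ (A @ x - b)          # gradient of the whole sum
        smallest_grad_norm = min(smallest_grad_norm, np.linalg.norm(full_grad))

    print("smallest full-gradient norm observed:", smallest_grad_norm)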



Journal

Journal title: Mathematical Programming

Year: 2017

ISSN: 0025-5610, 1436-4646

DOI: 10.1007/s10107-017-1173-0